
    Fully automatic binary glioma grading based on pre-therapy MRI using 3D Convolutional Neural Networks

    The optimal treatment strategy for newly diagnosed glioma is strongly influenced by tumour malignancy. Manual non-invasive grading based on MRI is not always accurate, and biopsies to verify the diagnosis negatively impact overall survival. In this paper, we propose a fully automatic 3D computer-aided diagnosis (CAD) system to non-invasively differentiate high-grade glioblastoma from lower-grade glioma. The approach consists of an automatic segmentation step to extract the tumour ROI, followed by classification using a 3D convolutional neural network. Segmentation was performed using a 3D U-Net, achieving a Dice score of 88.53%, which matches top-performing algorithms in the BraTS 2018 challenge. The classification network was trained and evaluated on a large heterogeneous dataset of 549 patients, reaching an accuracy of 91%. Additionally, the CAD system was evaluated on data from the Ghent University Hospital and achieved an accuracy of 92%, which shows that the algorithm is robust to data from different centres.
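
    To make the two-stage design concrete, below is a minimal sketch of how such a segmentation-then-classification pipeline could be wired together in PyTorch. The `seg_net` and `cls_net` modules, the ROI margin and the tensor shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch

@torch.no_grad()
def grade_glioma(mri, seg_net, cls_net, margin=8):
    """Sketch of the two-stage CAD pipeline (hypothetical modules).

    mri: (1, C, D, H, W) pre-therapy multi-modal MRI volume.
    """
    mask = seg_net(mri).sigmoid() > 0.5            # (1, 1, D, H, W) binary tumour mask
    idx = mask[0, 0].nonzero()                     # (N, 3) voxel coordinates of the tumour
    lo = (idx.min(dim=0).values - margin).clamp(min=0).tolist()
    hi = (idx.max(dim=0).values + margin).tolist()
    roi = mri[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]  # crop tumour bounding box
    prob_hgg = cls_net(roi).sigmoid()              # predicted P(high-grade glioblastoma)
    return mask, prob_hgg
```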

    Automated MRI based pipeline for glioma segmentation and prediction of grade, IDH mutation and 1p19q co-deletion

    In the WHO glioma classification guidelines, grade, IDH mutation and 1p19q co-deletion play a central role, as they are important markers for prognosis and optimal therapy planning. Therefore, we propose a fully automatic, MRI-based, 3D pipeline for glioma segmentation and classification. The designed segmentation network was a 3D U-Net, achieving an average whole-tumor Dice score of 90%. After segmentation, the 3D tumor ROI is extracted and fed into the multi-task classification network. The network was trained and evaluated on a large heterogeneous dataset of 628 patients, collected from The Cancer Imaging Archive (TCIA) and BraTS 2019 databases. Additionally, the network was validated on an independent dataset of 110 patients retrospectively acquired at the Ghent University Hospital (GUH). Classification AUC scores are 0.93, 0.94 and 0.82 on the TCIA test data and 0.94, 0.86 and 0.87 on the GUH data for grade, IDH and 1p19q status, respectively.
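
    A multi-task network of the kind described could, for example, share one 3D CNN encoder and attach a binary head per marker. The sketch below is only an assumption about the general structure; the `MultiTaskGliomaNet` name and all layer sizes are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn

class MultiTaskGliomaNet(nn.Module):
    """Shared 3D encoder with one binary head per marker (illustrative)."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> (N, 32) shared features
        )
        self.heads = nn.ModuleDict({
            "grade": nn.Linear(32, 1),   # HGG vs LGG
            "idh": nn.Linear(32, 1),     # IDH mutated vs wild-type
            "1p19q": nn.Linear(32, 1),   # 1p19q co-deleted vs intact
        })

    def forward(self, roi):
        feat = self.encoder(roi)
        return {name: head(feat) for name, head in self.heads.items()}

# Joint training would typically sum the per-task losses, e.g.
# loss = sum(nn.functional.binary_cross_entropy_with_logits(out[k], y[k]) for k in out)
```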

    AI in medical imaging

    Today, hospitals produce a staggering amount of digital information, stored in electronic health records. The largest volume of healthcare data comes from medical imaging. Due to advances in medical image acquisition, novel imaging procedures are being introduced, and the amount of diagnostic imaging is rapidly increasing. Analysing all these images has become a tremendous challenge and a bottleneck for efficient diagnosis, therapy planning and follow-up. At the same time, these large amounts of data also provide opportunities to develop computer-aided image analysis tools that will become indispensable for efficiently extracting relevant information. For example, in image segmentation, where objects of interest are detected and delineated, Artificial Intelligence (AI) can play an important role. Manual delineations by human experts are tedious and time-consuming, and thus impractical in clinical routine, and they also suffer from inter- and intra-observer variability. AI, and more specifically deep learning algorithms such as convolutional neural networks, can perform image segmentation automatically and more efficiently. Besides segmentation, AI is being applied to numerous image analysis tasks such as image reconstruction, registration, classification and imaging genomics. Several challenges remain to be tackled to ensure the adoption of AI in clinical routine. Although the amount of recorded data is increasing fast, it is scattered across independent centres with large variations in protocols. Furthermore, the amount of data in imaging remains small compared to datasets found in other, non-medical computer vision domains. Finally, more research is needed towards explainable AI to understand and trust the developed algorithms.

    Binary glioma grading: radiomics versus pre-trained CNN features

    We compare the predictive performance of hand-engineered radiomics features with features extracted through a pre-trained CNN for discriminating glioblastoma from lower-grade glioma. The BRATS 2017 database, containing MRI data of 285 patients, was used. State-of-the-art performance was achieved (AUC of 96.4%) with radiomics features extracted from manually segmented tumour volumes. With pre-trained CNN features extracted from the tumour bounding box, an AUC of 93.5% was obtained.
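
    As a rough illustration of the pre-trained-CNN branch of this comparison, the sketch below extracts fixed features from tumour bounding-box crops with an ImageNet-pretrained ResNet-18 and fits a simple linear classifier on top. The choice of backbone, the use of 2D crops and the logistic-regression classifier are all assumptions for illustration; the paper's exact feature extractor and classifier may differ.

```python
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# ImageNet-pretrained backbone used as a fixed feature extractor
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # drop the classification head, keep 512-d features
backbone.eval()

@torch.no_grad()
def cnn_features(crops):
    """crops: (N, 3, H, W) tumour bounding-box slices, ImageNet-normalised."""
    return backbone(crops).numpy()     # (N, 512) feature matrix

# Hypothetical training data: X = cnn_features(train_crops), y = HGG/LGG labels
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# AUC would then be computed on held-out patients, e.g. with sklearn.metrics.roc_auc_score.
```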

    Artificial intelligence with deep learning in nuclear medicine and radiology

    The use of deep learning in medical imaging has increased rapidly over the past few years, finding applications throughout the entire radiology pipeline, from improved scanner performance to automatic disease detection and diagnosis. These advancements have resulted in a wide variety of deep learning approaches being developed, solving unique challenges for various imaging modalities. This paper provides a review of these developments from a technical point of view, categorizing the different methodologies and summarizing their implementation. We provide an introduction to the design of neural networks and their training procedure, after which we take an extended look at their uses in medical imaging. We cover the different sections of the radiology pipeline, highlighting some influential works and discussing the merits and limitations of deep learning approaches compared to other traditional methods. As such, this review is intended to provide a broad yet concise overview for the interested reader, facilitating adoption and interdisciplinary research of deep learning in the field of medical imaging.

    Mitigating the adverse effect of Compton scatter on the positioning of gamma interactions in large monolithic PET detectors

    In a typical monolithic PET detector setup, scintillation light is captured by an array of photodetectors, from which the first interaction position is estimated. This is necessary to draw an accurate line of response. However, a majority of gamma rays undergo one or more Compton interactions before the photoelectric interaction. For these events, it is more difficult to recover the first interaction position. In this study, we use optical simulation data and neural networks to understand and mitigate the degrading effect of Compton scatter on positioning accuracy. A neural network was trained to predict the 3D first interaction position. Additionally, a network was trained to classify events into three classes: events scattered over a 2D distance smaller than 1 mm (class 0), between 1 mm and 5 mm (class 1), and further than 5 mm (class 2). Finally, a pipeline was designed where events are first classified with the scatter detection network and subsequently discarded (class 2) or positioned with separate networks trained for classes 0 and 1. With one neural network trained for all events, an average 3D positioning error of 1.5 mm and a FWHM of 0.49 mm are achieved. The scatter detection network achieves an overall accuracy of 65%. Through the combination of scatter detection and separate positioning networks for classes 0 and 1, the average 3D positioning error is reduced by 0.29 mm. Hence, we show that an improvement of about 20% can be achieved through the inclusion of Compton scatter detection. The ultimate goal is to apply the presented methodology to experimental data.
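
    The cascade described above could look roughly like the following sketch, where a scatter-detection network routes each event's light distribution either to a class-specific positioning network or to rejection. The `scatter_net` and `pos_nets` modules are hypothetical stand-ins for the trained networks.

```python
import torch

@torch.no_grad()
def position_event(light, scatter_net, pos_nets):
    """light: (1, P) photodetector light distribution for one gamma event."""
    cls = scatter_net(light).argmax(dim=1).item()  # 0: <1 mm, 1: 1-5 mm, 2: >5 mm scatter
    if cls == 2:
        return None                                # discard far-scattered events
    return pos_nets[cls](light)                    # predicted 3D (x, y, z) interaction position
```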